
    Predictors of Language Outcome for Children in the Ontario Infant Hearing Program

    The Ontario Infant Hearing Program (OIHP) provides early interventions (i.e., hearing aids) to children who are hard of hearing (CHH) because research consistently demonstrates their benefit to language outcomes. The impact of pre-fitting language abilities on these outcomes is not well understood. This retrospective cohort analysis examined the performance of OIHP children on the Preschool Language Scale-4 at the time of (n = 47), and after (n = 19), initial hearing aid intervention. Regression analyses revealed that, before amplification, hearing loss severity predicted language abilities. After amplification, however, severity of hearing loss did not uniquely predict language achievement; its effect was instead driven by its relationship with language at the time of amplification. These findings suggest that hearing aids fitted early may provide a preservation benefit to the language achievement of CHH, and that this benefit is greatest for children at highest risk (i.e., those with the weakest initial language and the most severe hearing loss).

    Developing a Spoken Language Outcome Monitoring Procedure for a Canadian Early Hearing Detection and Intervention Program: Process and Recommendations

    Abstract: Purpose: Routine spoken language outcome monitoring is one component of Early Hearing Detection and Intervention (EHDI) programs for children who are hard of hearing and learning a spoken language. However, there is no peer-reviewed research that documents how spoken language outcome monitoring may be achieved, or what processes EHDI programs can use to develop these procedures. The present paper describes the process used by a Canadian EHDI program and the final recommendations that were developed from this process. Methodology: Through consultation with the program’s stakeholders, consideration of the Joint Committee on Infant Hearing’s recommendations, and drawing on our own expertise in spoken language assessment, we developed an overall framework for monitoring spoken language. Based on the needs of the EHDI program, we conducted a scoping review and critical appraisal of norm-referenced tests to identify candidate tests to use within this framework. Results: We recommended a two-pronged assessment approach to measuring spoken language outcomes, including program-level assessment and individual vulnerability testing. We identified several tests that have been previously used to measure spoken language outcomes. There was little consistency in how tests were used across studies, with no clear indicators as to which tests are most appropriate for which outcome monitoring purposes. Conclusions: This paper reports on the framework and tests used by a Canadian EHDI program to accomplish spoken language outcome monitoring. We highlight the different factors that need to be considered when designing spoken language outcome monitoring procedures, and the complexity of doing so. Future work evaluating the effectiveness and feasibility of our recommendations is warranted.

    Developing a Spoken Language Outcome Monitoring Procedure for Early Hearing Detection and Intervention Programs

    Early Hearing Detection and Intervention programs are associated with improved spoken language outcomes for children who are deaf/hard-of-hearing. Best practice recommendations call for regular spoken language outcome monitoring to support decision making for all stakeholders (families, audiologists, speech-language pathologists, and program managers). Despite the clear calls for spoken language outcome monitoring, there is no peer-reviewed guidance as to how Early Hearing Detection and Intervention programs can best accomplish this monitoring. This dissertation evaluates the assumptions underlying spoken language outcome monitoring and contributes a new procedure developed for a Canadian Early Hearing Detection and Intervention program: the Ontario Infant Hearing Program. Whether decisions can be validly made using assessment data underpins the tenability of spoken language outcome monitoring. Chapter 2 considers test misuse across the profession of speech-language pathology from test design to clinical practice. I argue that a conceptual validity framework is one potential solution. This framework is applied throughout the dissertation. Chapter 3 aims to develop a spoken language outcome monitoring procedure to support the Ontario Infant Hearing Program. This chapter describes the process I engaged in, including a scoping review and critical appraisal of norm-referenced spoken language tests, to develop an outcome monitoring procedure for the Infant Hearing Program. Prior to implementing the recommended procedures province-wide, the Infant Hearing Program needed evidence as to whether the recommendations (a) meaningfully inform stakeholder decisions and (b) are feasible to implement. Chapter 4 reports on a pilot implementation of the recommended procedures and speech-language pathologists’ perceptions of it. 
During development of the procedure outlined in Chapter 3, one of the key vulnerabilities I recommended monitoring was early vocal development in children who are younger than 2 years. Chapter 5 is a survey study capturing the clinical questions speech-language pathologists have about early vocal development of children who are deaf/hard-of-hearing, to inform future projects assessing the validity of candidate vocal development assessments. Overall, this dissertation contributes a spoken language outcome monitoring procedure for Early Hearing Detection and Intervention programs and highlights the tension between decisions, psychometrics, and implementation in accomplishing spoken language outcome monitoring to inform best practice recommendations.

    What Do Speech-Language Pathologists Want to Know When Assessing Vocal Development in Children Who Are Deaf/Hard-of-Hearing?

    Abstract: Purpose: Delays in vocal development are an early predictor of ongoing language difficulty for children who are deaf/hard-of-hearing (CDHH). Despite the importance of monitoring early vocal development in clinical practice, there are few suitable tools. This study aimed to identify the clinical decisions that speech-language pathologists (SLPs) most want to make when assessing vocal development and their current barriers to doing so. Method: Fifty-eight SLPs who provide services to CDHH younger than 22 months completed a survey. The first section measured potential barriers to vocal development assessment. The second section asked SLPs to rate the importance of 15 clinical decisions they could make about vocal development. Results: SLPs believed assessing vocal development was important for other stakeholders and reported that they had the necessary skills and knowledge to assess vocal development. Barriers primarily related to a lack of commercially available tests. SLPs rated all 15 clinical decisions as somewhat or very important. Their top five decisions included a variety of assessment purposes that tests are not typically designed to support, including measuring change, differential diagnosis, and goal setting. Conclusions: SLPs wish to make a number of clinical decisions when assessing vocal development in CDHH but lack access to appropriate tools to do so. Future work is needed to develop tools that are statistically equipped to fulfill these purposes. Understanding SLPs’ assessment purposes will allow future tests to better map onto the clinical decisions that SLPs need to make to support CDHH and their families, and will facilitate implementation into clinical practice.


    Usability and Feasibility of a Spoken Language Outcome Monitoring Procedure in a Canadian Early Hearing Detection & Intervention Program: Results of a 1-Year Pilot

    Abstract: Purpose: Best practice recommendations for Early Hearing Detection and Intervention (EHDI) programs include routine spoken language outcome monitoring. The present article reports on pilot data that evaluated the usability and feasibility of a spoken language outcome monitoring procedure developed for Ontario’s Infant Hearing Program (IHP). This procedure included both Program-level monitoring, using omnibus language tests from birth to 6;0, and individual vulnerability monitoring of key domains of spoken language known to be at risk in children who are deaf/hard-of-hearing. Methodology: Speech-language pathologists (SLPs) in the IHP piloted the new procedures for one year and provided feedback on the procedures through surveys at the end of the pilot. Results: Data suggested that the Program-level procedure might be sensitive to change over time and to known predictors of spoken language outcomes. Some, but not all, Program-level test scores were predicted by the presence of additional developmental factors. None of the test scores were significantly predicted by severity of hearing loss. Depending on the tests and scores used, some aspects of the Program-level procedure were sensitive to change over time. There was insufficient evidence to support individual vulnerability monitoring. SLPs reported significant concerns about the time involved in implementing both procedures. Conclusions: This article describes preliminary evidence suggesting that the Program-level procedure might be feasible to implement and useful for evaluating EHDI programs. Future evaluations are needed to determine whether the procedure can be accurately implemented at scale in the IHP, and whether the data that result from the procedure can meaningfully inform stakeholders’ decision-making.

    Implementing Evidence-Based Assessment Practices for the Monitoring of Spoken Language Outcomes in Children who are Deaf or Hard of Hearing in a Large Community Program

    The purpose of this quality improvement pilot was to evaluate the effectiveness of an online learning module for (a) changing speech-language pathologists’ perceptions about outcome monitoring and assessment protocols for children who are deaf or hard of hearing and (b) supporting speech-language pathologists’ understanding of evidence-based protocols to be implemented in their community-based program. Using principles of integrated knowledge translation and the Ottawa Model of Research Use, an online learning module was designed to support the implementation of evidence-based assessment protocols for these children in a large publicly funded program in Ontario, Canada. A pre–post study was then conducted with 56 speech-language pathologists (56 of 73 invited; 77% response rate) who took a pre-module survey, completed the online learning module, and then immediately took a post-module survey. After completing the learning module, speech-language pathologists reported improved perceptions about outcome monitoring, good understanding of the procedures to be implemented, and intentions to implement the new procedures in practice. Implementation materials were rated as highly valuable. Online learning modules can be used to effectively translate evidence-based assessment procedures to speech-language pathologists. Developing interventions using theory and in collaboration with stakeholders can support the implementation of these types of procedures into practice.

    A comment on test validation: The importance of the clinical perspective

    © 2019 American Speech-Language-Hearing Association. Purpose: The misuse of standardized assessments has been a long-standing concern in speech-language pathology and has traditionally been viewed as an issue of clinician competency and training. The purpose of this article is to consider the contribution of communication breakdowns between test developers and end users to this issue. Method: We considered the misuse of standardized assessments through the lens of the 2-communities theory, in which standardized tests are viewed as a product developed in 1 community (researchers/test developers) to be used by another community (frontline clinicians). Under this view, optimal test development involves a conversation to which both parties bring unique expertise and perspectives. Results: Consideration of the interpretations that standardized tests are typically validated to support revealed a mismatch between these and the interpretations and decisions that speech-language pathologists typically need to make. Test development using classical test theory, which underpins many of the tests in our field, contributes to this mismatch. Application of item response theory could better equip clinicians with the psychometric evidence to support the interpretations they desire but is not commonly found in the standardized tests used by speech-language pathologists. Conclusions: Advocacy for, and insistence on, the consideration of clinical perspectives and decision making in the test validation process is a necessary part of our role. By improving the nature of the statistical evidence reported in standardized assessments, we can ensure these tools are appropriate to fulfill our professional obligations in a clinically feasible way.


    Essential Content for Teaching Implementation Practice in Healthcare: A Mixed-Methods Study of Teams Offering Capacity-Building Initiatives

    Background Applying the knowledge gained through implementation science can support the uptake of research evidence into practice; however, those doing and supporting implementation (implementation practitioners) may face barriers to applying implementation science in their work. One strategy to enhance individuals’ and teams’ ability to apply implementation science in practice is through training and professional development opportunities (capacity-building initiatives). Although there is an increasing demand for and offerings of implementation practice capacity-building initiatives, there is no universal agreement on what content should be included. In this study we aimed to explore what capacity-building developers and deliverers identify as essential training content for teaching implementation practice. Methods We conducted a convergent mixed-methods study with participants who had developed and/or delivered a capacity-building initiative focused on teaching implementation practice. Participants completed an online questionnaire to provide details on their capacity-building initiatives; took part in an interview or focus group to explore their questionnaire responses in depth; and offered course materials for review. We analyzed a subset of data that focused on the capacity-building initiatives’ content and curriculum. We used descriptive statistics for quantitative data and conventional content analysis for qualitative data, with the data sets merged during the analytic phase. We presented frequency counts for each category to highlight commonalities and differences across capacity-building initiatives. Results Thirty-three individuals representing 20 capacity-building initiatives participated. 
Study participants identified several core content areas included in their capacity-building initiatives: (1) taking a process approach to implementation; (2) identifying and applying implementation theories, models, frameworks, and approaches; (3) learning implementation steps and skills; and (4) developing relational skills. In addition, study participants described offering applied and pragmatic content (e.g., tools and resources) and tailoring and evolving the capacity-building initiative content to address emerging trends in implementation science. Study participants also highlighted some challenges learners face when acquiring and applying implementation practice knowledge and skills. Conclusions This study synthesized what experienced capacity-building initiative developers and deliverers identify as essential content for teaching implementation practice. These findings can inform the development, refinement, and delivery of capacity-building initiatives, as well as future research directions, to enhance the translation of implementation science into practice.